A Prompt completion algorithm
Algorithm 2 describes the prompt-completion algorithm introduced in Section 2.2. Algorithm 3 is a variant of the rebinding Algorithm 1 that does not use EM: the decoded clone (and all the clones in its clone set) is rapidly bound to emit the surprise, after which the pseudocount ϵ is added to the initial emission matrix and its rows are normalized. Figure 9: A. Transition graph of the learned CSCG model with overallocation ratio. We present below the tables of results associated with Figure 1.
Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials
Yin, Shi, Dai, Zujian, Pan, Xinyang, He, Lixin
Deep learning methods for electronic-structure Hamiltonian prediction have offered significant computational efficiency advantages over traditional density functional theory (DFT), yet the diversity of atomic types, structural patterns, and the high-dimensional complexity of Hamiltonians pose substantial challenges to generalization performance. In this work, we contribute to both the methodology and dataset sides to advance a universal deep learning paradigm for Hamiltonian prediction. On the method side, we propose NextHAM, a neural E(3)-symmetric and expressive correction method for efficient and generalizable materials electronic-structure Hamiltonian prediction. First, we introduce zeroth-step Hamiltonians, which can be efficiently constructed from the initial charge density of DFT, as informative descriptors for the neural regression model at the input level and as initial estimates of the target Hamiltonian at the output level, so that the regression model directly predicts the correction terms to the target ground truths, thereby significantly simplifying the input-output mapping and facilitating fine-grained predictions. Second, we present a neural Transformer architecture with strict E(3) symmetry and high non-linear expressiveness for Hamiltonian prediction. Third, we propose a novel training objective to ensure the accuracy of Hamiltonians in both real space and reciprocal space, preventing error amplification and the occurrence of "ghost states" caused by the large condition number of the overlap matrix. Experimental results on Materials-HAM-SOC demonstrate that NextHAM achieves excellent accuracy in predicting Hamiltonians and band structures, with the spin off-diagonal blocks reaching sub-µeV accuracy. These results establish NextHAM as a universal and highly accurate deep learning model for electronic-structure prediction, delivering DFT-level precision with dramatically improved computational efficiency.
Understanding the electronic structure is fundamental to unraveling how electrons govern the properties of condensed matter systems. This knowledge is essential for predicting a wide range of material characteristics, such as electrical conductivity, magnetism, optical behavior, and chemical activity, which are vital for technologies spanning from electronics to sustainable energy and advanced catalysis. At the heart of these calculations is the challenge of determining the system's Hamiltonian matrix, whose eigenvalues and eigenstates yield important quantities like energy levels, band structures, and electronic wavefunctions. Traditionally, Density Functional Theory (DFT) (Hohenberg & Kohn, 1964; Kohn & Sham, 1965) has been the standard approach for these problems. Recently, deep learning has emerged as a powerful tool in the physical sciences (Zhang et al., 2025).
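The correction scheme in the NextHAM abstract, where a cheap zeroth-step Hamiltonian serves both as an input descriptor and as the base of the prediction, can be sketched with toy shapes and an untrained placeholder model (all sizes, values, and the linear "network" below are illustrative, not the paper's architecture):

```python
import numpy as np

# Delta-learning sketch: the network predicts a correction term, so the
# prediction is H0 + correction rather than the full Hamiltonian.
rng = np.random.default_rng(0)
n = 4                                            # toy orbital basis size
H_true = rng.normal(size=(n, n))
H_true = (H_true + H_true.T) / 2                 # symmetric target
H0 = H_true + 0.1 * rng.normal(size=(n, n))      # "zeroth-step" estimate
H0 = (H0 + H0.T) / 2

def model(h0, W):
    """Toy linear stand-in for the network; outputs a symmetric correction."""
    delta = W @ h0
    return (delta + delta.T) / 2

W = np.zeros((n, n))                  # untrained: correction is zero
H_pred = H0 + model(H0, W)            # prediction = base estimate + correction
```

Even with an untrained correction, the prediction inherits the accuracy of the zeroth-step estimate, which is what makes the input-output mapping easier to learn than regressing the full Hamiltonian.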
AI-Based Impedance Encoding-Decoding Method for Online Impedance Network Construction of Wind Farms
Zhang, Xiaojuan, Jiang, Tianyu, Zong, Haoxiang, Zhang, Chen, Li, Chendan, Molinas, Marta
The impedance network (IN) model is gaining popularity in the oscillation analysis of wind farms. However, the construction of such an IN model requires impedance curves of each wind turbine under their respective operating conditions, making its online application difficult due to the transmission of numerous high-density impedance curves. To address this issue, this paper proposes an AI-based impedance encoding-decoding method to facilitate the online construction of the IN model. First, an impedance encoder is trained to compress impedance curves by setting the number of neurons much smaller than the number of frequency points. Then, the compressed data of each turbine are uploaded to the wind farm, and an impedance decoder is trained to reconstruct the original impedance curves. Finally, based on the nodal admittance matrix (NAM) method, the IN model of the wind farm can be obtained. The proposed method is validated via model training and real-time simulations, demonstrating that the encoded impedance vectors enable fast transmission and accurate reconstruction of the original impedance curves.
Estimation of the reduced density matrix and entanglement entropies using autoregressive networks
Białas, Piotr, Korcyl, Piotr, Stebel, Tomasz, Zapolski, Dawid
We present an application of autoregressive neural networks to Monte Carlo simulations of quantum spin chains using the correspondence with classical two-dimensional spin systems. We use a hierarchy of neural networks capable of estimating conditional probabilities of consecutive spins to evaluate elements of reduced density matrices directly. Using the Ising chain as an example, we calculate the continuum limit of the ground state's von Neumann and Rényi bipartite entanglement entropies of an interval of up to 5 spins. We demonstrate that our architecture can estimate all the needed matrix elements with a single training run for a fixed time discretization and lattice volume. Our method can be applied to other types of spin chains, possibly with defects, as well as to estimating entanglement entropies of thermal states at non-zero temperature.
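The core autoregressive idea, factorizing the joint probability of a spin configuration into a product of learned conditionals, can be illustrated with a toy conditional model (the logistic form of the conditional below is a hypothetical placeholder for the networks in the paper):

```python
import numpy as np

# Chain-rule factorization used by autoregressive networks:
# p(s1..sN) = p(s1) * p(s2|s1) * ... * p(sN|s1..sN-1)

def cond_prob_up(prefix):
    """Toy conditional p(s_next = +1 | prefix); a network would go here."""
    return 1.0 / (1.0 + np.exp(-0.5 * sum(prefix)))   # hypothetical form

def joint_prob(config):
    """Probability of a full spin configuration (spins are +1 / -1)."""
    p = 1.0
    for i, s in enumerate(config):
        p_up = cond_prob_up(config[:i])
        p *= p_up if s == +1 else 1.0 - p_up
    return p

# by construction, probabilities over all 2^N configurations sum to one
N = 5
total = sum(
    joint_prob(tuple(1 if (m >> i) & 1 else -1 for i in range(N)))
    for m in range(2 ** N)
)
```

Because any product of valid conditionals is automatically normalized, such a model gives exact configuration probabilities, which is what allows density-matrix elements to be estimated directly rather than through ratios of partition functions.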
Reducing Storage of Pretrained Neural Networks by Rate-Constrained Quantization and Entropy Coding
Conzelmann, Alexander, Bamler, Robert
The ever-growing size of neural networks poses serious challenges for resource-constrained devices, such as embedded sensors. Compression algorithms that reduce their size can mitigate these problems, provided that model performance stays close to the original. We propose a novel post-training compression framework that combines rate-aware quantization with entropy coding by (1) extending the well-known layer-wise loss with a quadratic rate estimate, and (2) providing locally exact solutions to this modified objective following the Optimal Brain Surgeon (OBS) method. Our method allows for very fast decoding and is compatible with arbitrary quantization grids. We verify our results empirically by testing on various computer-vision networks, achieving a 20-40% decrease in bit rate at the same performance as the popular compression algorithm NNCodec. Our code is available at https://github.com/Conzel/cerwu.
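The rate-aware quantization idea in the abstract above, trading distortion against code length instead of rounding to the nearest grid point, can be sketched for a single scalar weight (the grid and the per-symbol bit costs below are made up for illustration and are not the paper's entropy model):

```python
import numpy as np

# Rate-aware scalar quantization sketch: pick the grid point minimizing
# distortion + lambda * code length, rather than nearest-neighbor rounding.
grid = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])
bits = np.array([4.0, 3.0, 1.0, 3.0, 4.0])    # hypothetical: 0.0 codes cheaply

def quantize(w, lam):
    cost = (w - grid) ** 2 + lam * bits        # distortion + rate penalty
    return grid[np.argmin(cost)]

w = 0.14
q_plain = quantize(w, lam=0.0)     # nearest grid point: 0.25
q_rate = quantize(w, lam=0.01)     # rate pressure pulls the weight to 0.0
```

With the rate term switched on, weights near a cheap symbol get snapped to it, which is what lets a subsequent entropy coder reach lower bit rates at comparable distortion.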
Advanced deep architecture pruning using single filter performance
Tzach, Yarden, Meir, Yuval, Gross, Ronit D., Tevet, Ofek, Koresh, Ella, Kanter, Ido
Pruning the parameters and structure of neural networks reduces the computational complexity, energy consumption, and latency during inference. Recently, a method was presented that quantitatively measures the performance of single filters in each layer of a deep learning (DL) architecture, revealing a new comprehensive mechanism underlying successful deep learning. Herein, we demonstrate how this understanding paves the path to highly diluting the convolutional layers of deep architectures without affecting their overall accuracy, using applied filter cluster connections (AFCC). AFCC is exemplified on VGG-11 and EfficientNet-B0 architectures trained on CIFAR-100, and its pruning rate outperforms other techniques at the same pruning magnitude. Additionally, this technique is broadened to single nodal performance and high-rate pruning of fully connected layers, suggesting a possible implementation to considerably reduce the complexity of over-parameterized AI tasks.
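Generic performance-based filter pruning, of which the AFCC method above is a refinement, can be sketched as scoring each filter and zeroing the weakest ones (the random scores and layer shapes below are placeholders; the paper derives real scores from single-filter accuracy measurements):

```python
import numpy as np

# Sketch of performance-based filter pruning (not the AFCC procedure itself).
rng = np.random.default_rng(3)
n_filters = 16
weights = rng.normal(size=(n_filters, 3, 3, 3))   # toy conv layer

# hypothetical per-filter performance scores (random placeholders here)
scores = rng.uniform(size=n_filters)

keep_ratio = 0.25                                 # heavily dilute the layer
k = int(n_filters * keep_ratio)
keep = np.argsort(scores)[-k:]                    # indices of the best filters
mask = np.zeros(n_filters, dtype=bool)
mask[keep] = True
pruned = weights * mask[:, None, None, None]      # weak filters zeroed out
```

The pruned layer keeps only the filters whose measured contribution is highest, which is how a large fraction of the convolutional parameters can be removed without hurting overall accuracy.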
Flow Annealed Importance Sampling Bootstrap meets Differentiable Particle Physics
Kofler, Annalena, Stimper, Vincent, Mikhasenko, Mikhail, Kagan, Michael, Heinrich, Lukas
High-energy physics requires the generation of large numbers of simulated data samples from complex but analytically tractable distributions called matrix elements. Surrogate models, such as normalizing flows, are gaining popularity for this task due to their computational efficiency. We adopt an approach based on Flow Annealed importance sampling Bootstrap (FAB) that evaluates the differentiable target density during training and helps avoid the costly generation of training data in advance. We show that FAB reaches higher sampling efficiency with fewer target evaluations in high dimensions in comparison to other methods.
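The importance-sampling machinery underlying FAB can be illustrated with a fixed Gaussian standing in for the trained flow q(x): samples are drawn from q, reweighted by the evaluable target density, and the effective sample size measures sampling efficiency (the target, proposal, and sample count below are illustrative):

```python
import numpy as np

# Importance-sampling sketch: a standard normal plays the role of the flow.
rng = np.random.default_rng(4)

def log_q(x):
    """Log-density of the proposal (standard normal)."""
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_p(x):
    """Unnormalized log-density of the target (normal with mean 0.5)."""
    return -0.5 * (x - 0.5) ** 2

x = rng.normal(size=10_000)                  # draws from q
log_w = log_p(x) - log_q(x)                  # importance log-weights
w = np.exp(log_w - log_w.max())              # stabilized weights
ess = w.sum() ** 2 / (w ** 2).sum()          # effective sample size
mean_est = (w * x).sum() / w.sum()           # self-normalized estimate
```

The closer the proposal matches the target, the closer the effective sample size gets to the raw sample count; training the flow against the differentiable target density is what drives that match without pre-generated training data.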
Natural gradient and parameter estimation for quantum Boltzmann machines
Patel, Dhrumil, Wilde, Mark M.
Thermal states play a fundamental role in various areas of physics, and they are becoming increasingly important in quantum information science, with applications related to semi-definite programming, quantum Boltzmann machine learning, Hamiltonian learning, and the related task of estimating the parameters of a Hamiltonian. Here we establish formulas underlying the basic geometry of parameterized thermal states, and we delineate quantum algorithms for estimating the values of these formulas. More specifically, we prove formulas for the Fisher--Bures and Kubo--Mori information matrices of parameterized thermal states, and our quantum algorithms for estimating their matrix elements involve a combination of classical sampling, Hamiltonian simulation, and the Hadamard test. These results have applications in developing a natural gradient descent algorithm for quantum Boltzmann machine learning, which takes into account the geometry of thermal states, and in establishing fundamental limitations on the ability to estimate the parameters of a Hamiltonian, when given access to thermal-state samples. For the latter task, and for the special case of estimating a single parameter, we sketch an algorithm that realizes a measurement that is asymptotically optimal for the estimation task. We finally stress that the natural gradient descent algorithm developed here can be used for any machine learning problem that employs the quantum Boltzmann machine ansatz.
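The natural gradient descent mentioned above preconditions the raw gradient with the inverse information matrix, so that update sizes respect the geometry of the parameterized states. A minimal numerical sketch, with a toy diagonal matrix standing in for the Fisher-Bures or Kubo-Mori matrix:

```python
import numpy as np

# Natural-gradient step sketch: theta <- theta - eta * F^{-1} grad.
theta = np.array([1.0, 1.0])                 # Hamiltonian parameters (toy)
grad = np.array([0.2, 0.2])                  # gradient of the loss
F = np.array([[4.0, 0.0],                    # toy information matrix
              [0.0, 1.0]])

eta = 0.1
nat_step = np.linalg.solve(F, grad)          # F^{-1} grad, no explicit inverse
theta_new = theta - eta * nat_step
```

Note how the direction with large information (large curvature of the state manifold) receives a proportionally smaller step than the plain gradient would give it; in the paper, the matrix elements of F are themselves estimated with quantum algorithms rather than written down directly.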
Applications of flow models to the generation of correlated lattice QCD ensembles
Abbott, Ryan, Botev, Aleksandar, Boyda, Denis, Hackett, Daniel C., Kanwar, Gurtej, Racanière, Sébastien, Rezende, Danilo J., Romero-López, Fernando, Shanahan, Phiala E., Urban, Julian M.
Machine-learned normalizing flows can be used in the context of lattice quantum field theory to generate statistically correlated ensembles of lattice gauge fields at different action parameters. This work demonstrates how these correlations can be exploited for variance reduction in the computation of observables. Three different proof-of-concept applications are demonstrated using a novel residual flow architecture: continuum limits of gauge theories, the mass dependence of QCD observables, and hadronic matrix elements based on the Feynman-Hellmann approach. In all three cases, it is shown that statistical uncertainties are significantly reduced when machine-learned flows are incorporated as compared with the same calculations performed with uncorrelated ensembles or direct reweighting.
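The variance-reduction mechanism in the lattice QCD abstract above rests on a basic identity: for a difference of observables, Var(A - B) = Var(A) + Var(B) - 2 Cov(A, B), so positively correlated ensembles shrink the error on the difference. A toy numerical check (the synthetic "observables" below are illustrative Gaussians, not lattice data):

```python
import numpy as np

# Correlated vs. uncorrelated ensembles for a difference of observables.
rng = np.random.default_rng(5)
n = 100_000
common = rng.normal(size=n)                    # shared fluctuations
a = 1.0 + common + 0.3 * rng.normal(size=n)    # observable at parameter set 1
b = 0.8 + common + 0.3 * rng.normal(size=n)    # observable at parameter set 2
b_indep = 0.8 + rng.normal(size=n) + 0.3 * rng.normal(size=n)

var_corr = np.var(a - b)           # correlated ensembles: common part cancels
var_indep = np.var(a - b_indep)    # uncorrelated ensembles
```

Flows that map one ensemble to another at different action parameters induce exactly this kind of correlation, which is why the continuum-limit, mass-dependence, and Feynman-Hellmann applications all see reduced statistical uncertainty on differences and derivatives of observables.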